Unlocking Fairness: a Trade-off Revisited

Neural Information Processing Systems

The prevailing wisdom is that a model's fairness and its accuracy are in tension with one another. However, there is a pernicious {\em modeling-evaluating dualism} bedeviling fair machine learning in which phenomena such as label bias are appropriately acknowledged as a source of unfairness when designing fair models, only to be tacitly abandoned when evaluating them. We investigate fairness and accuracy, but this time under a variety of controlled conditions in which we vary the amount and type of bias. We find, under reasonable assumptions, that the tension between fairness and accuracy is illusory, and vanishes as soon as we account for these phenomena during evaluation. Moreover, our results are consistent with an opposing conclusion: fairness and accuracy are sometimes in accord. This raises the question, {\em might there be a way to harness fairness to improve accuracy after all?} Since most notions of fairness are defined with respect to the model's predictions rather than the ground truth labels, this provides an opportunity to see if we can improve accuracy by harnessing appropriate notions of fairness over large quantities of {\em unlabeled} data with techniques like posterior regularization and generalized expectation. Indeed, we find that semi-supervision not only improves fairness but also accuracy, and it has advantages over existing in-processing methods that succumb to selection bias on the training set.
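The abstract's semi-supervised idea can be illustrated with a minimal sketch (this is an invented toy example on synthetic data, not the paper's implementation): fit a logistic model on a small labeled set while adding a demographic-parity-style penalty, in the spirit of posterior regularization, on the model's predictions over a larger unlabeled pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature x, a binary group attribute a, and labels y on a
# small labeled set; a larger unlabeled pool carries only (x, a).
n_lab, n_unlab = 50, 500
x_lab = rng.normal(size=n_lab)
a_lab = rng.integers(0, 2, size=n_lab)
y_lab = (x_lab + 0.5 * a_lab + 0.3 * rng.normal(size=n_lab) > 0).astype(float)
x_un = rng.normal(size=n_unlab)
a_un = rng.integers(0, 2, size=n_unlab)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, lam=1.0):
    """Labeled log-loss plus a demographic-parity penalty on unlabeled data."""
    p_lab = sigmoid(w[0] + w[1] * x_lab)
    log_loss = -np.mean(y_lab * np.log(p_lab + 1e-9)
                        + (1 - y_lab) * np.log(1 - p_lab + 1e-9))
    p_un = sigmoid(w[0] + w[1] * x_un)
    # Penalize the gap between the groups' mean predicted positive rates.
    gap = p_un[a_un == 1].mean() - p_un[a_un == 0].mean()
    return log_loss + lam * gap ** 2

# Crude gradient-free fit over a small grid; enough for a demonstration.
grid = np.linspace(-3, 3, 61)
best = min(((b, s) for b in grid for s in grid),
           key=lambda w: loss(np.array(w)))
```

Here the fairness term needs only the group attribute, not the label, so it can be evaluated over arbitrarily large unlabeled pools; a real system would use gradient-based optimization and a richer model.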


Reviews: Unlocking Fairness: a Trade-off Revisited

Neural Information Processing Systems

The premise of the paper is that many other papers in the ML fairness literature assume that there is an inevitable trade-off between fairness and accuracy, often without adequate justification for this assumption, and that many papers either assume that the data itself is unbiased or at least do not explicitly state their assumptions about the types of bias in the data. I am not fully convinced that the paper's characterization of previous work is accurate. In my view, most fairness papers typically work in one of the following two regimes: 1. The setting in which the learner has access to fair, correct ground truth. In this case, there is clearly no trade-off between fairness and accuracy; if the training and test data are fair, a perfectly accurate classifier would also be perfectly fair.
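The reviewer's first regime can be made concrete with a small sanity check (a sketch on synthetic data, not taken from the paper or the review): a perfectly accurate classifier reproduces the labels exactly, so any group-rate gap it exhibits is precisely the gap already present in the ground truth; if the labels are fair, the classifier is too.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary group attribute and "fair" ground truth: labels are drawn with the
# same base rate in both groups, independent of group membership.
a = rng.integers(0, 2, size=1000)
y = rng.random(1000) < 0.4

# A perfectly accurate classifier simply reproduces the labels.
pred = y.copy()

# The classifier's demographic-parity gap equals the labels' own gap exactly.
pred_gap = pred[a == 1].mean() - pred[a == 0].mean()
label_gap = y[a == 1].mean() - y[a == 0].mean()
```

With finite samples the label gap is only approximately zero, but the classifier's gap matches it exactly, which is the reviewer's point: with fair ground truth, accuracy and fairness do not conflict.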



Unlocking Fairness: a Trade-off Revisited

Wick, Michael; Panda, Swetasudha; Tristan, Jean-Baptiste

Neural Information Processing Systems
